8 research outputs found

    Effective Task Transfer Through Indirect Encoding

    An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Often approaches to task transfer focus on transforming the original representation to fit the new task. Such representational transformations are necessary because the target task often requires new state information that was not included in the original representation. In RoboCup Keepaway, changing from the 3 vs. 2 variant of the task to 4 vs. 3 adds state information for each of the new players. In contrast, this dissertation explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To this end, (1) the bird’s eye view (BEV) representation is introduced, which can represent different tasks on the same two-dimensional map. Because the BEV represents state information associated with positions instead of objects, it can be scaled to more objects without manipulation. In this way, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV, which is (2) demonstrated in this dissertation. Yet a challenge for such a representation is that a raw two-dimensional map is high-dimensional and unstructured. This dissertation demonstrates how this problem is addressed naturally by the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach. HyperNEAT evolves an indirect encoding, which compresses the representation by exploiting its geometry. The dissertation then explores further exploiting the power of such an encoding, beginning by (3) enhancing the configuration of the BEV with a focus on modularity. The need for further nonlinearity is then (4) investigated through the addition of hidden nodes. Furthermore, (5) the size of the BEV can be manipulated because it is indirectly encoded. Thus the resolution of the BEV, which is dictated by its size, can be increased, culminating in a HyperNEAT extension that expresses the BEV at effectively infinite resolution. Additionally, scaling to higher resolutions by gradually increasing the size of the BEV is explored. Finally, (6) the ambitious problem of scaling from the Keepaway task to the Half-field Offense task is investigated with the BEV. Overall, this dissertation demonstrates that advanced representations in conjunction with indirect encoding can contribute to scaling learning techniques to more challenging tasks, such as the Half-field Offense RoboCup soccer domain.
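
    The positional (rather than object-bound) nature of the BEV is the key to its scalability, so a toy illustration may help. The sketch below is a minimal, hypothetical rendering of the idea; the function name, grid size, and field dimensions are invented for illustration, not taken from the dissertation. Player positions are painted onto a fixed-size grid, so the representation has the same shape for 3 vs. 2 and 4 vs. 3.

        import numpy as np

        def bird_eye_view(teammates, opponents, resolution=20, field=25.0):
            """Paint agent positions onto a fixed-size 2D map, one channel
            per team. State attaches to grid cells, not to objects."""
            bev = np.zeros((2, resolution, resolution))
            for channel, positions in enumerate((teammates, opponents)):
                for x, y in positions:
                    # Map field coordinates to the nearest grid cell.
                    i = min(int(x / field * resolution), resolution - 1)
                    j = min(int(y / field * resolution), resolution - 1)
                    bev[channel, i, j] = 1.0
            return bev

        # Both tasks yield maps of identical shape; only activations differ.
        bev_3v2 = bird_eye_view([(5, 5), (20, 5), (12, 20)],
                                [(10, 10), (14, 12)])
        bev_4v3 = bird_eye_view([(5, 5), (20, 5), (12, 20), (12, 12)],
                                [(10, 10), (14, 12), (8, 18)])
        assert bev_3v2.shape == bev_4v3.shape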

    Evolving Static Representations for Task Transfer

    An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Previous approaches to transfer in Keepaway have focused on transforming the original representation to fit the new task. In contrast, this paper explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To demonstrate this point, a bird’s eye view (BEV) representation is introduced that can represent different tasks on the same two-dimensional map. For example, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV. Yet the problem is that a raw two-dimensional map is high-dimensional and unstructured. This paper shows how this problem is addressed naturally by an idea from evolutionary computation called indirect encoding, which compresses the representation by exploiting its geometry. The result is that the BEV learns a Keepaway policy that transfers without further learning or manipulation. It also facilitates transferring knowledge learned in a different domain, Knight Joust, into Keepaway. Finally, the indirect encoding of the BEV means that its geometry can be changed without altering the solution. Thus static representations facilitate several kinds of transfer.
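
    Both claims here, the compression and the changeable geometry, follow from the connection weights being a function of coordinates rather than a lookup table. A rough sketch of that mechanism, with a toy stand-in in place of an actual evolved CPPN (all names below are hypothetical):

        import numpy as np

        def query_cppn(cppn, resolution):
            """Build a substrate weight matrix by querying a coordinate
            function at every (source, target) pair of grid positions."""
            axis = np.linspace(-1.0, 1.0, resolution)
            points = [(x, y) for x in axis for y in axis]
            n = len(points)
            weights = np.zeros((n, n))
            for i, (x1, y1) in enumerate(points):
                for j, (x2, y2) in enumerate(points):
                    weights[i, j] = cppn(x1, y1, x2, y2)
            return weights

        # Toy stand-in for an evolved CPPN: excitation decaying with distance.
        def toy_cppn(x1, y1, x2, y2):
            return np.exp(-((x1 - x2) ** 2 + (y1 - y2) ** 2))

        # The same encoding expressed at two geometries: the connectivity
        # pattern carries over because it is a function of position.
        coarse = query_cppn(toy_cppn, 5)   #  25 x  25 weights
        fine = query_cppn(toy_cppn, 10)    # 100 x 100 weights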

    Transfer Learning Through Indirect Encoding

    An important goal for the generative and developmental systems (GDS) community is to show that GDS approaches can compete with more mainstream approaches in machine learning (ML). One popular ML domain is RoboCup and its subtasks (e.g. Keepaway). This paper shows how a GDS approach called HyperNEAT competes with the best results to date in Keepaway. Furthermore, a significant advantage of GDS is shown to be in transfer learning. For example, playing Keepaway should contribute to learning the full game of soccer. Previous approaches to transfer have focused on transforming the original representation to fit the new task. In contrast, this paper explores transfer with a representation designed to be the same even across different tasks. A bird’s eye view (BEV) representation is introduced that can represent different tasks on the same two-dimensional map. Yet the problem is that a raw two-dimensional map is high-dimensional and unstructured. The problem is addressed naturally by indirect encoding, which compresses the representation in HyperNEAT by exploiting its geometry. The result is that the BEV learns a Keepaway policy that transfers from two different training domains without further learning or manipulation. The results in this paper thus show the power of GDS relative to other ML methods.

    Constraining Connectivity to Encourage Modularity in HyperNEAT

    A challenging goal of generative and developmental systems (GDS) is to effectively evolve neural networks as complex and capable as those found in nature. Two key properties of neural structures in nature are regularity and modularity. While HyperNEAT has proven capable of generating neural network connectivity patterns with regularities, its ability to evolve modularity remains in question. This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity. In particular, an extension is introduced called a Link Expression Output (LEO), yielding HyperNEAT-LEO, which allows HyperNEAT to evolve the pattern of weights independently from the pattern of connection expression. Because HyperNEAT evolves such patterns as functions of geometry, important general topographic principles for organizing connectivity can be seeded into the initial population. For example, a key topographic concept in nature that encourages modularity is locality, that is, components of a module are located near each other. As experiments in this paper show, by seeding HyperNEAT with a bias towards local connectivity implemented through the LEO, modular structures arise naturally. Thus this paper provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity.
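
    As a rough sketch of the mechanism (hypothetical names and threshold; the seed below is a Gaussian-of-distance stand-in for the seeded hidden node the paper describes): the CPPN emits a weight output and a separate LEO output, and a link exists only where the LEO output clears a threshold.

        import math

        def express(weight_out, leo_out, threshold=0.0):
            """A link is expressed only if the LEO output clears the
            threshold; weights evolve independently of expression."""
            return weight_out if leo_out > threshold else 0.0

        def locality_seed(x1, y1, x2, y2):
            """Gaussian of the distance between source and target: high
            for nearby nodes, low for distant ones, scaled to [-1, 1]."""
            d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
            return 2.0 * math.exp(-d2) - 1.0

        # Local links survive; long-range links are pruned, which lets
        # spatially separated groups of nodes form distinct modules.
        print(express(0.8, locality_seed(0.1, 0.1, 0.2, 0.1)))    # 0.8
        print(express(0.8, locality_seed(-0.9, -0.9, 0.9, 0.9)))  # 0.0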

    Trust In Sparse Supervisory Control

    Command and Control (C2) is the practice of directing teams of autonomous units, regardless of whether the units are directed by human decision making or are unmanned. Humans are adaptive, and their behaviors are recognizable to superiors who were educated in a similar manner. This paper describes the sparse supervisory control that must be exercised over teams of highly autonomous units, and considers what it means for a commander to supervise autonomous unmanned systems (AUS) that employ machine learning and cooperative autonomy. Commanders must decide whether to trust behaviors they have never seen before, and developing that trust may require several strategies. This paper describes some possible strategies in an effort to explain the challenges that must be solved.

    Focused Nash Memory for Coevolution

    Learning invariant features in a set of similar fitness landscapes for speedup